Pliny the Prompter posted a jailbreak for OpenAI's newest foundation model, GPT-4o, just a few hours after its release. The jailbreak let users prompt the model to output explicit copyrighted lyrics, instructions for making banned items, strategic plans for attacks, and medical advice based on X-rays. Pliny has been finding jailbreaks for leading large language models for around nine months. This article contains an interview in which Pliny discusses what motivates them, their approach to jailbreaking a new model, the legal ramifications of AI jailbreaking, the ethics of jailbreaking, and more.